An Essay about Markov

Author

  • Daniel W. Stroock
Abstract

Unless one is clairvoyant, the only temporally evolving processes which are tractable are those whose future behavior can be predicted on the basis of data available at the time when the prediction is being made. Of course, in general, the behavior of even such an evolution will be impossible to predict. For example, if, in order to make a prediction, one has to know the detailed history of everything that has happened during the entire history of the universe, then making a prediction may be a practical, if not a theoretical, impossibility. For this reason, one tries to study evolutions mathematically with models in which most of the distant past can be ignored when making predictions about the future. In fact, many mathematical models of evolutions have the property that, for the purpose of predicting the future, the past becomes irrelevant as soon as one knows the present, in which case the evolution is said to be a Markov process, the topic at hand.

The components of a Markov process are its state space S and its transition rule T. Mathematically, S is just some non-empty set, which in applications encodes all the possible states in which the evolving system can find itself, and T : S → S is a function from S into itself which gives the transition rule. More precisely, if the system is now in state x, it will next be in state T(x), from which it will go to T²(x) = T(T(x)), and so on.

To give a sense of the sort of reasoning required to construct a Markov process, consider a (classical) physical particle whose motion is governed by Newton’s equation F = ma (“force equals mass times acceleration”). At least in theory, Newton’s equation says that, assuming one knows the mass of the particle and the force field F which acts on it, one can predict where the particle will be in the future as soon as one knows its position and velocity now. On the other hand, knowing only its present position is not sufficient by itself. Thus, even though one may care about nothing but its position, in order to produce a Markov process for a particle evolving according to Newton’s equation, it is necessary to adopt the attitude that the state of the particle consists of its position and velocity, not just its position alone. Of course, because velocity is the derivative of position, the two are so inextricably intertwined that one might be tempted to concentrate on position alone, on the grounds that one will be able to compute the velocity whenever necessary. However, this...
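To make the definition above concrete, the following is a minimal sketch in Python of a deterministic Markov process of exactly this kind: the state space S consists of (position, velocity) pairs and the transition rule T is a single explicit-Euler step of Newton’s equation F = ma. The particular force law (a linear spring, F = −Kx), the mass, and the step size DT are illustrative assumptions and are not taken from the essay.

    # A minimal sketch (not from the essay) of a deterministic Markov process:
    # the state space S is the set of (position, velocity) pairs, and the
    # transition rule T is one explicit-Euler step of Newton's equation F = m*a.
    # The force law (a linear spring F = -K*x), MASS, K, and DT are assumptions
    # chosen only to make the example concrete.

    from typing import NamedTuple

    class State(NamedTuple):
        position: float
        velocity: float

    MASS = 1.0   # assumed particle mass
    K = 1.0      # assumed spring constant in F = -K * x
    DT = 0.01    # assumed time step for the discrete transition rule

    def force(x: float) -> float:
        """Force field acting on the particle (illustrative linear spring)."""
        return -K * x

    def T(s: State) -> State:
        """Transition rule T : S -> S.  The next state depends only on the
        current state, which is exactly the Markov property."""
        a = force(s.position) / MASS          # Newton: a = F / m
        return State(position=s.position + DT * s.velocity,
                     velocity=s.velocity + DT * a)

    # Iterating T gives the whole future: x, T(x), T²(x) = T(T(x)), ...
    s = State(position=1.0, velocity=0.0)
    for _ in range(3):
        s = T(s)
        print(s)

Two states with the same position but different velocities evolve differently under T, which is why the state must be taken to be the (position, velocity) pair rather than the position alone.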


Similar articles

Comparison of the scores in essay part and MCQ part of renal pathophysiology exam and surveying students’ views about the effect of such exams on their study

Introduction: In order to benefit from the advantages of essay exams, one must be sure to reliably judge students based on essay test scores. The aim of this study was to examine the correlation between scores in the essay and MCQ parts of the renal pathophysiology final exam and students’ views about the effect of the type of test on their study. Methods: This descriptive correlational survey wa...

Stochastic Analysis of Maintenance and Routing Policies in Queueing Systems

This dissertation focuses on reexamining traditional management problems that emerge in service systems where customers or jobs queue for service. In particular, we investigate how a manager should make maintenance and routing decisions in settings where there is a departure from traditional modeling assumptions. In many cases, the performance evaluation of a management problem has, at its hear...

AN APPLICATION OF TRAJECTORIES AMBIGUITY IN TWO-STATE MARKOV CHAIN

In this paper, the ambiguity of finite-state irreducible Markov chain trajectories is recalled and is obtained for a two-state Markov chain. I give an applicable example of this concept in a presidential election.

MCMC Methods for Functions: Modifying Old Algorithms to Make Them Faster

Markov Chain Monte Carlo methods on function spaces are useful, for example to solve inverse problems. Classical methods suffer from poor performance on function space, which makes modifications of them necessary. This essay provides an overview of certain dimension-independent methods. Discussed are applications, examples, theoretical underpinnings, and the mathematical properties behind these m...

Mapping Activity Diagram to Petri Net: Application of Markov Theory for Analyzing Non-Functional Parameters

The quality of an architectural design of a software system has a great influence on achieving the non-functional requirements of the system. A regular software development project is often influenced by non-functional factors such as the customers' expectations about the performance and reliability of the software as well as the reduction of underlying risks. The evaluation of non-functional paramet...

ENTROPY FOR DTMC SIS EPIDEMIC MODEL

In this paper, first, a history of mathematical models is given. Next, some basic information about random variables, stochastic processes, and Markov chains is introduced. Then the entropy for a discrete-time Markov process is discussed. After that, the entropy for SIS stochastic models is computed, and it is proved that an epidemic will disappear after a long time.


Publication date: 2005